Convex Relaxations for Markov Random Field MAP estimation

Author

  • Timothee Cour

Abstract

Markov Random Fields (MRFs) are commonly used in computer vision and machine learning applications to model interactions between interdependent variables. Finding the Maximum A Posteriori (MAP) solution of an MRF is in general intractable, and one has to resort to approximate solutions. We review some of the recent literature on convex relaxations for MAP estimation. Our starting point is to notice that MAP estimation (a discrete problem) is in fact equivalent to a real-valued but non-convex Quadratic Program (QP). We reformulate some of those relaxations and see that we can distinguish two main strategies: 1) optimize a convex upper bound of the (non-convex) cost function (L2QP, CQP, our spectral relaxation); 2) reformulate as a linear objective using lift-and-project and optimize over a convex upper bound of the (non-convex) feasible set (SDP, SOCP, LP relaxations). We analyse these relaxations according to the following criteria: optimality conditions, relative dominance relationships, multiplicative/additive bounds on the quality of the approximation, ability to handle arbitrary clique sizes, space/time complexity, and convergence guarantees. We show a few surprising results, such as the equivalence between the CQP relaxation (a quadratic program) and the SOCP relaxation (containing a linear objective), and furthermore show that a large set of SOCP constraints are implied by the local marginalization constraint. Along the way, we also contribute a few new results. The first one is a 1/k^(c−1) multiplicative approximation bound for an MRF with arbitrary clique size c and k labels, in the general case (extending the pairwise case c = 2). The second one is a tighter additive bound for the CQP and LP relaxations in the general case (with k = 2 labels), which also has the big advantage of being invariant to reparameterizations. The new bound involves a modularity norm instead of an ℓ1 norm.
We also show that a multiplicative bound δ for the LP relaxation would imply δ ≤ 1/2 (for k = 2), putting LP on par with other convex relaxations such as L2QP. Finally, we characterize the equivalence classes of a (broader) class of reparameterizations, show their dimension, and how a basis can be used to generate potentially tighter relaxations. We believe these contributions are novel.
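The abstract's starting point, that discrete MAP estimation is equivalent to a quadratic program over label indicator vectors, can be sketched on a toy pairwise model (the potentials and variable names below are illustrative assumptions, not taken from the paper):

```python
import itertools
import numpy as np

# Toy pairwise MRF on a 3-node chain with k = 2 labels (illustrative values).
n, k = 3, 2
rng = np.random.default_rng(0)
unary = rng.standard_normal((n, k))           # theta_i(a)
pair = {(0, 1): rng.standard_normal((k, k)),  # theta_ij(a, b) for each edge
        (1, 2): rng.standard_normal((k, k))}

def score(labels):
    """MRF objective to maximize: sum of unary plus pairwise potentials."""
    s = sum(unary[i, labels[i]] for i in range(n))
    return s + sum(pair[e][labels[e[0]], labels[e[1]]] for e in pair)

# The same objective as a quadratic form mu^T A mu, where mu in {0,1}^{nk} is the
# stacked label-indicator vector (mu[i*k + a] = 1 iff node i takes label a) and
# the unary terms sit on the diagonal of A.
A = np.zeros((n * k, n * k))
A[np.arange(n * k), np.arange(n * k)] = unary.ravel()
for (i, j), theta in pair.items():
    A[i * k:(i + 1) * k, j * k:(j + 1) * k] = theta

def indicator(labels):
    mu = np.zeros(n * k)
    mu[[i * k + a for i, a in enumerate(labels)]] = 1.0
    return mu

# Brute-force MAP (feasible only for tiny models); the discrete optimum and the
# QP objective agree on the corresponding indicator vector.
best = max(itertools.product(range(k), repeat=n), key=score)
mu = indicator(best)
assert np.isclose(score(best), mu @ A @ mu)
```

Relaxations such as L2QP or CQP then either replace the objective with a convex bound or relax the integrality constraint on mu to a convex feasible set, which is the distinction the abstract draws between the two strategies.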

Similar Articles

An Analysis of Convex Relaxations for MAP Estimation of Discrete MRFs

The problem of obtaining the maximum a posteriori estimate of a general discrete Markov random field (i.e., a Markov random field defined using a discrete set of labels) is known to be NP-hard. However, due to its central importance in many applications, several approximation algorithms have been proposed in the literature. In this paper, we present an analysis of three such algorithms based on...


Tightness Results for Local Consistency Relaxations in Continuous MRFs

Finding the MAP assignment in graphical models is a challenging task that generally requires approximations. One popular approximation approach is to use linear programming relaxations that enforce local consistency. While these are commonly used for discrete variable models, they are much less understood for models with continuous variables. Here we define local consistency relaxations of MAP ...
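The local-consistency LP relaxation described above can be sketched for a tiny discrete pairwise model: lifted edge marginals are tied to node marginals by marginalization constraints (a minimal sketch with illustrative potentials, assuming SciPy is available; on a single edge, i.e. a tree, this relaxation is known to be tight, so the LP value equals the exact MAP value):

```python
import numpy as np
from scipy.optimize import linprog

# Toy model: 2 nodes, k = 2 labels, a single edge (illustrative potentials).
k = 2
unary = np.array([[0.5, -0.2],     # theta_0(a)
                  [0.1,  0.7]])    # theta_1(b)
pairwise = np.array([[0.3, -0.4],  # theta_01(a, b)
                     [-0.1, 0.2]])

# LP variables: mu_0(a) at indices 0..1, mu_1(b) at 2..3, edge marginals
# mu_01(a, b) at 4..7 (row-major). linprog minimizes, so negate to maximize.
c = -np.concatenate([unary.ravel(), pairwise.ravel()])

A_eq, b_eq = [], []
# Normalization: sum_a mu_i(a) = 1 for each node i.
for i in range(2):
    row = np.zeros(8); row[i * k:(i + 1) * k] = 1
    A_eq.append(row); b_eq.append(1.0)
# Local marginalization: sum_b mu_01(a, b) = mu_0(a), sum_a mu_01(a, b) = mu_1(b).
for a in range(k):
    row = np.zeros(8); row[4 + a * k:4 + (a + 1) * k] = 1; row[a] = -1
    A_eq.append(row); b_eq.append(0.0)
for b in range(k):
    row = np.zeros(8); row[4 + b:8:k] = 1; row[2 + b] = -1
    A_eq.append(row); b_eq.append(0.0)

res = linprog(c, A_eq=np.vstack(A_eq), b_eq=b_eq, bounds=(0, 1))

# On a tree the local-consistency relaxation is tight, so the LP value matches
# the exact MAP value found by enumeration.
map_value = max(unary[0, a] + unary[1, b] + pairwise[a, b]
                for a in range(k) for b in range(k))
assert np.isclose(-res.fun, map_value)
```

On loopy graphs the same constraint set only outer-bounds the marginal polytope, so the LP value can strictly exceed the true MAP value.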


Inconsistent parameter estimation in Markov random fields: Benefits in the computation-limited setting

Consider the problem of joint parameter estimation and prediction in a Markov random field: i.e., the model parameters are estimated on the basis of an initial set of data, and then the fitted model is used to perform prediction (e.g., smoothing, denoising, interpolation) on a new noisy observation. Working under the restriction of limited computation, we analyze a joint method in which the sam...


Estimating the wrong Markov random field: Benefits in the computation-limited setting

Consider the problem of joint parameter estimation and prediction in a Markov random field: i.e., the model parameters are estimated on the basis of an initial set of data, and then the fitted model is used to perform prediction (e.g., smoothing, denoising, interpolation) on a new noisy observation. Working in the computation-limited setting, we analyze a joint method in which the same convex v...


Estimating the "Wrong" Graphical Model: Benefits in the Computation-Limited Setting

Consider the problem of joint parameter estimation and prediction in a Markov random field: that is, the model parameters are estimated on the basis of an initial set of data, and then the fitted model is used to perform prediction (e.g., smoothing, denoising, interpolation) on a new noisy observation. Working under the restriction of limited computation, we analyze a joint method in which the ...



Journal:

Volume   Issue

Pages  -

Publication date: 2008